
    Regular subspaces of Dirichlet forms

    The regular subspaces of a Dirichlet form are the regular Dirichlet forms that inherit the original form but have smaller domains. We are concerned with two problems: (1) the existence of regular subspaces of a fixed Dirichlet form, and (2) the characterization of these regular subspaces when they exist. In this paper, we first study the structure of the regular subspaces of a fixed Dirichlet form. The main results show that the jumping and killing measures of each regular subspace are equal to those of the original Dirichlet form. Using the independent coupling of Dirichlet forms and some celebrated probabilistic transformations, we then study the existence and characterization of the regular subspaces of local Dirichlet forms.
    Comment: This paper is collected in Festschrift Masatoshi Fukushima, In Honor of Masatoshi Fukushima's Sanju, pp. 397-420, 201
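    For orientation, the defining condition described in the abstract can be written out as follows; this is the standard formulation, and the notation here is generic rather than taken from the paper. A Dirichlet form $(\mathcal{E}',\mathcal{F}')$ on $L^2(E;m)$ is a regular subspace of a regular Dirichlet form $(\mathcal{E},\mathcal{F})$ if

    \[
    \mathcal{F}' \subset \mathcal{F}, \qquad \mathcal{E}'(u,v) = \mathcal{E}(u,v) \quad \text{for all } u,v \in \mathcal{F}',
    \]

    and $(\mathcal{E}',\mathcal{F}')$ is itself a regular Dirichlet form on $L^2(E;m)$; the subspace is called proper when $\mathcal{F}' \neq \mathcal{F}$.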

    Markov Chain Approximations for One Dimensional Diffusions

    Selective Memory Recursive Least Squares: Recast Forgetting into Memory in RBF Neural Network Based Real-Time Learning

    In radial basis function neural network (RBFNN) based real-time learning tasks, forgetting mechanisms are widely used so that the neural network keeps its sensitivity to new data. However, with forgetting mechanisms, some useful knowledge is lost simply because it was learned a long time ago, a phenomenon we refer to as passive knowledge forgetting. To address this problem, this paper proposes a real-time training method named selective memory recursive least squares (SMRLS), in which the classical forgetting mechanisms are recast into a memory mechanism. Unlike a forgetting mechanism, which mainly weighs samples according to when they were collected, the memory mechanism weighs samples according to both their temporal and spatial distribution. With SMRLS, the input space of the RBFNN is evenly divided into a finite number of partitions, and a synthesized objective function is developed using synthesized samples from each partition. In addition to the current approximation error, the neural network also updates its weights according to the recorded data from the partition being visited. Compared with classical training methods, including the forgetting factor recursive least squares (FFRLS) and stochastic gradient descent (SGD) methods, SMRLS achieves improved learning speed and generalization capability, as demonstrated by the corresponding simulation results.
    Comment: 12 pages, 15 figures
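    The partition-and-memorise idea in the abstract can be sketched in a few lines. The code below is a minimal illustration, not the authors' implementation: it works in one dimension, stores only the most recent sample per partition, and replays the stored sample of the visited partition alongside the current one; the class and helper names are invented for this sketch.

    ```python
    import numpy as np

    def rbf_features(x, centers, width):
        """Gaussian RBF feature vector for a scalar input x."""
        return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

    class SelectiveMemoryRLS:
        """Toy sketch of selective memory: the 1-D input space is split into a
        grid of partitions, the latest sample of each partition is memorised,
        and recursive least squares (RLS) is run on both the current sample and
        the stored sample of the partition being visited."""

        def __init__(self, centers, width, n_partitions, x_range):
            self.centers, self.width = centers, width
            self.w = np.zeros(len(centers))        # RBFNN output weights
            self.P = np.eye(len(centers)) * 1e6    # large P0 = weak prior
            self.memory = [None] * n_partitions    # one sample per partition
            self.lo, self.hi = x_range
            self.n = n_partitions

        def _partition(self, x):
            idx = int((x - self.lo) / (self.hi - self.lo) * self.n)
            return min(max(idx, 0), self.n - 1)

        def _rls_step(self, phi, y):
            # Standard RLS update with no forgetting factor.
            g = self.P @ phi / (1.0 + phi @ self.P @ phi)
            self.w = self.w + g * (y - phi @ self.w)
            self.P = self.P - np.outer(g, phi @ self.P)

        def update(self, x, y):
            cell = self._partition(x)
            self._rls_step(rbf_features(x, self.centers, self.width), y)
            if self.memory[cell] is not None:
                xs, ys = self.memory[cell]         # replay the stored sample
                self._rls_step(rbf_features(xs, self.centers, self.width), ys)
            self.memory[cell] = (x, y)             # memorise rather than forget

        def predict(self, x):
            return rbf_features(x, self.centers, self.width) @ self.w
    ```

    On a noise-free target that is exactly representable by the RBF basis, the replayed updates drive the weights to the least-squares fit.
    
    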

    Real-Time Progressive Learning: Mutually Reinforcing Learning and Control with Neural-Network-Based Selective Memory

    Memory, as the basis of learning, determines how knowledge is stored, updated, and forgotten, and thereby determines the efficiency of learning. Featuring a memory mechanism, a radial basis function neural network (RBFNN) based learning control scheme named real-time progressive learning (RTPL) is proposed to learn the unknown dynamics of a system with guaranteed stability and closed-loop performance. Instead of the stochastic gradient descent (SGD) update law of adaptive neural control (ANC), RTPL adopts the selective memory recursive least squares (SMRLS) algorithm to update the weights of the RBFNN. Through SMRLS, the approximation capability of the RBFNN is uniformly distributed over the feature space, and the passive knowledge forgetting phenomenon of the SGD method is thus suppressed. As a result, RTPL offers the following merits over classical ANC: 1) guaranteed learning capability under low-level persistent excitation (PE), 2) improved learning performance (learning speed, accuracy, and generalization capability), and 3) a low gain requirement that ensures the robustness of RTPL in practical applications. Moreover, RTPL-based learning and control gradually reinforce each other during task execution, making the scheme appropriate for long-term learning control tasks. As an example, RTPL is applied to the tracking control problem of a class of nonlinear systems, with the RBFNN acting as an adaptive feedforward controller. Theoretical analysis and simulation studies demonstrate the effectiveness of RTPL.
    Comment: 16 pages, 15 figures
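    The learning-and-control loop described here can be sketched on a first-order scalar plant. This is a simplified illustration rather than the paper's scheme: it uses ordinary recursive least squares in place of SMRLS, assumes the state derivative is measurable so the unknown nonlinearity can be reconstructed as a training target, and all names (`simulate`, the plant f(x) = sin(x), the gains) are chosen for this sketch.

    ```python
    import numpy as np

    def rbf(x, centers, width=0.4):
        """Gaussian RBF feature vector for a scalar state x."""
        return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

    def simulate(T=20.0, dt=0.005, k=5.0, learn=True):
        """Track x_d(t) = sin(t) on the plant x' = f(x) + u, with an RBFNN
        feedforward term learned online by recursive least squares."""
        centers = np.linspace(-2.0, 2.0, 15)
        w = np.zeros(15)                     # RBFNN output weights
        P = np.eye(15) * 1e3                 # RLS covariance
        f = np.sin                           # nonlinearity, unknown to the controller
        x = 0.5
        errors = []
        for i in range(int(T / dt)):
            t = i * dt
            xd, xd_dot = np.sin(t), np.cos(t)          # reference trajectory
            e = x - xd
            phi = rbf(x, centers)
            f_hat = phi @ w if learn else 0.0
            u = xd_dot - k * e - f_hat                 # feedback + NN feedforward
            x_next = x + dt * (f(x) + u)               # Euler step of the plant
            if learn:
                y = (x_next - x) / dt - u              # reconstructed f(x) sample
                g = P @ phi / (1.0 + phi @ P @ phi)    # RLS gain
                w = w + g * (y - phi @ w)
                P = P - np.outer(g, phi @ P)
            x = x_next
            errors.append(abs(e))
        return np.array(errors)
    ```

    As the network's estimate of f improves along the visited trajectory, the feedforward term cancels the unknown dynamics and the tracking error in the later part of the run drops well below the early-phase error, mirroring the mutual reinforcement of learning and control described above.
    
    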

    Regional Finance and Regional Disparities in China

    China’s growth has been spectacularly high and persistent over the last few decades. However, there have been regular expressions of concern about the uneven distribution of the benefits across regions and, at times, it has been asserted that the regional distribution of available investment funds has played an important role: national financial institutions (mainly state-owned banks) have redirected deposits from inland regions into loans to large institutions in the more prosperous coastal regions. At the same time, smaller regionally focussed institutions are likely to improve the distribution of funds. We use a panel data set disaggregated by province for the years 1986 to 2004 to test these propositions, employing recent panel unit root and cointegration tests on data for state-owned bank loans as well as loans by rural credit cooperatives. We find that financial disparities are related to output disparities, that this relationship is positive, that it is stronger for rural credit cooperatives than for state-owned banks, and that the relationship is causal in both the long and short run. A reduction in financial disparities can be expected to lead to a narrowing of output disparities in both the short run and the long run, with the effect being larger for rural credit cooperatives than for state-owned commercial banks.
    Keywords: regional disparities, panel econometrics, regional finance, China